Active Chemical Sensing With Partially Observable Markov Decision Processes

Authors

  • Rakesh Gosangi
  • Ricardo Gutierrez-Osuna
Abstract

We present an active-perception strategy to optimize the temperature program of metal-oxide sensors in real time, as the sensor reacts with its environment. We model the problem as a partially observable Markov decision process (POMDP), where actions correspond to measurements at particular temperatures, and the agent must find a temperature sequence that minimizes the Bayes risk. We validate the method on a binary classification problem with a simulated sensor. Our results show that the method provides a balance between classification rate and sensing costs.
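The formulation in the abstract can be sketched in a few lines: a Bayesian belief over the two classes is updated after each measurement, and a one-step-lookahead policy picks the sensor temperature whose expected Bayes risk (plus a per-measurement cost) is lowest, stopping when further sensing is not worth its cost. All temperatures, observation likelihoods, and costs below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

temps = [200, 300, 400]  # hypothetical operating temperatures (deg C)

# Assumed p(observation | class) at each temperature; each temperature
# discriminates the two classes to a different degree.
obs_model = {
    200: np.array([[0.60, 0.40],   # rows: class, cols: binary observation
                   [0.40, 0.60]]),
    300: np.array([[0.80, 0.20],
                   [0.20, 0.80]]),
    400: np.array([[0.55, 0.45],
                   [0.45, 0.55]]),
}

def belief_update(belief, temp, obs):
    """Bayes rule: posterior over classes after observing `obs` at `temp`."""
    posterior = belief * obs_model[temp][:, obs]
    return posterior / posterior.sum()

def bayes_risk(belief):
    """With 0/1 misclassification costs, the Bayes risk of deciding now
    is the probability of the less likely class."""
    return 1.0 - belief.max()

def expected_risk_after(belief, temp):
    """Expected Bayes risk after one more measurement at `temp`."""
    risk = 0.0
    for obs in range(2):
        p_obs = float(belief @ obs_model[temp][:, obs])
        risk += p_obs * bayes_risk(belief_update(belief, temp, obs))
    return risk

sensing_cost = 0.02   # assumed fixed cost per measurement
true_class = 0        # ground truth used only to simulate observations
belief = np.array([0.5, 0.5])

for step in range(10):
    best_temp = min(temps, key=lambda t: expected_risk_after(belief, t))
    # Stop when no measurement is expected to beat classifying immediately.
    if expected_risk_after(belief, best_temp) + sensing_cost >= bayes_risk(belief):
        break
    obs = rng.choice(2, p=obs_model[best_temp][true_class])
    belief = belief_update(belief, best_temp, obs)

print("final belief:", belief, "decision:", int(belief.argmax()))
```

This greedy, myopic policy is only a stand-in for a full POMDP solution, but it exhibits the same trade-off the paper reports: a higher `sensing_cost` makes the agent classify earlier, trading classification accuracy for fewer measurements.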


Similar references

Partially observable Markov decision processes

For reinforcement learning in environments in which an agent has access to a reliable state signal, methods based on the Markov decision process (MDP) have had many successes. In many problem domains, however, an agent suffers from limited sensing capabilities that preclude it from recovering a Markovian state signal from its perceptions. Extending the MDP framework, partially observable Markov...


Active visual sensing and collaboration on mobile robots using hierarchical POMDPs

A key challenge to widespread deployment of mobile robots in the real world is the ability to robustly and autonomously sense the environment and collaborate with teammates. Real-world domains are characterized by partial observability, non-deterministic action outcomes, and unforeseen changes, making autonomous sensing and collaboration a formidable challenge. This paper poses vision-based sensi...


Nonlinear POMDPs for Active State Tracking with Sensing Costs

Active state tracking is needed in object classification, target tracking, medical diagnosis, and estimation of sparse signals, among various other applications. Herein, active state tracking of a discrete-time, finite-state Markov chain is considered. Noisy Gaussian observations are dynamically collected by exerting appropriate control over their information content, while incurring a related s...


Approximation Algorithms for Solving Cost Observable Markov Decision Processes

Partial observability is a result of noisy or imperfect sensors that are not able to reveal the real state of the world. Problems that suffer from partial observability have been modelled as Partially Observable Markov Decision Processes (POMDPs). They have been studied by researchers in Operations Research and Artificial Intelligence for the past 30 years. Nevertheless, solving for the optimal...


A POMDP Framework to Find Optimal Inspection and Maintenance Policies via Availability and Profit Maximization for Manufacturing Systems

Maintenance can either increase or decrease a system's availability, so it is valuable to evaluate a maintenance policy from both the cost and the availability points of view, simultaneously and according to the decision maker's priorities. This study proposes a Partially Observable Markov Decision Process (POMDP) framework for a partially observable and stochastically deteriorating syste...



Publication date: 2008